| Product | Target Applications | Advantage | Main Features |
| --- | --- | --- | --- |
| Tesla P40 | Deep learning inference | 30× the deep learning inference speed of a CPU server | 47 TOPS INT8 integer operations; 12 TeraFLOPS single-precision performance; 24 GB GDDR5 memory; 1× decode engine, 2× encode engines |
| Tesla P100 | HPC and deep learning | For HPC and deep learning, a single P100 server can replace 32 CPU servers | 4.7 TeraFLOPS double-precision performance; 9.3 TeraFLOPS single-precision performance; 720 GB/s memory bandwidth (540 GB/s optional); 16 GB HBM2 memory |
| Tesla P4 | Deep learning inference and video transcoding | 40× the inference energy efficiency of a CPU | 22 TOPS INT8 integer operations; 5.5 TeraFLOPS single-precision performance; 8 GB GDDR5 GPU memory; 50 W / 75 W power |
| Tesla V100 | Artificial intelligence and high-performance computing | The performance of up to 100 CPUs in a single GPU | 112 TeraFLOPS deep learning performance; 14 TeraFLOPS single-precision performance; 250 W power |
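
To confirm which of these cards (and how much memory) a given server actually exposes, a minimal CUDA device-query sketch like the one below can be compiled with `nvcc`; it simply prints the fields most relevant to the table above, assuming the CUDA toolkit and driver are installed.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU detected.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Report device name, total memory, compute capability,
        // and SM count as seen by the driver.
        std::printf("GPU %d: %s\n", i, prop.name);
        std::printf("  Memory:             %.1f GB\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        std::printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        std::printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
    }
    return 0;
}
```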